
    LINKS: Learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images

    Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matter of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8 months of age, when white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used a multi-atlas label fusion strategy, which has the limitation of treating all available image modalities equally and is often computationally expensive. To cope with these limitations, in this paper we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images for tissue segmentation. Here, the multi-source images initially include only the multi-modality (T1, T2, and FA) images, and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge, where the proposed method was ranked top among all competing methods. Moreover, to alleviate possible anatomical errors, our method can also be combined with an anatomically constrained multi-atlas labeling approach to further improve segmentation accuracy.
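    A minimal sketch of the iterative multi-source idea described above is given below, assuming a voxel-wise feature representation and scikit-learn's random forest; the toy data, feature layout, iteration count, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Sketch (not the authors' pipeline): train a random forest on voxel-wise
# features from multi-modality images (T1, T2, FA), then feed the estimated
# tissue probability maps (CSF/GM/WM) back as extra features in later
# iterations, mirroring the iterative refinement described in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy voxel-wise features: one intensity feature per modality (T1, T2, FA).
n_voxels = 5000
X_modalities = rng.normal(size=(n_voxels, 3))
y_tissue = rng.integers(0, 3, size=n_voxels)  # 0=CSF, 1=GM, 2=WM (toy labels)

X = X_modalities
for iteration in range(3):  # number of refinement iterations is illustrative
    forest = RandomForestClassifier(n_estimators=100, random_state=iteration)
    forest.fit(X, y_tissue)
    # Current estimate of the tissue probability maps for every voxel.
    prob_maps = forest.predict_proba(X)
    # Later iterations see the modality features *and* the probability maps.
    X = np.hstack([X_modalities, prob_maps])

final_labels = forest.predict(X)
```

    In practice the probability maps for an unseen image would come from applying the previous iteration's forest to that image; the toy loop above only illustrates the feedback of estimated probability maps as additional features.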

    Automatically computed rating scales from MRI for patients with cognitive disorders

    Objectives: The aims of this study were to examine whether visual MRI rating scales used in the diagnostics of cognitive disorders can be estimated computationally, and to compare the visual rating scales with their computed counterparts in differential diagnostics.
    Methods: A set of volumetry and voxel-based morphometry imaging biomarkers was extracted from T1-weighted and FLAIR images. A regression model was developed for estimating visual rating scale values from a combination of imaging biomarkers. We studied three visual rating scales: medial temporal lobe atrophy (MTA), global cortical atrophy (GCA), and white matter hyperintensities (WMHs) measured by the Fazekas scale. Images and visual ratings from the Amsterdam Dementia Cohort (ADC) (N = 513) were used to develop the models and cross-validate them. The PredictND (N = 672) and ADNI (N = 752) cohorts were used for independent validation to test generalizability.
    Results: The correlation coefficients between visual and computed rating scale values were 0.83/0.78 (MTA-left), 0.83/0.79 (MTA-right), 0.64/0.64 (GCA), and 0.76/0.75 (Fazekas) in the ADC/PredictND cohorts. When performance in differential diagnostics was studied for the main types of dementia, the highest balanced accuracy, 0.75–0.86, was observed for separating the different dementias from cognitively normal subjects using computed GCA. The lowest accuracy, about 0.5 for all the visual and computed scales, was observed for the differentiation between Alzheimer's disease and frontotemporal lobar degeneration. Computed scales produced statistically significantly higher balanced accuracies than visual scales for MTA and GCA.
    Conclusions: MTA, GCA, and WMHs can be reliably estimated automatically, helping to provide consistent imaging biomarkers for diagnosing cognitive disorders, even for less experienced readers.
    Key Points:
    • Visual rating scales used in the diagnostics of cognitive disorders can be estimated computationally from MR images with intraclass correlations ranging from 0.64 (GCA) to 0.84 (MTA).
    • Computed scales provided high diagnostic accuracy with single-subject data (area under the receiver operating characteristic curve range, 0.84–0.94).
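    As a rough illustration of the estimation approach, the sketch below fits a cross-validated regression from imaging biomarkers to a visual rating and reports the correlation between visual and computed values; the synthetic data, biomarker count, and the choice of ridge regression are assumptions for illustration, not the paper's model.

```python
# Sketch (assumptions, not the paper's model): estimate a visual rating scale
# (e.g., MTA) from volumetry/VBM biomarkers with a regression model, using
# cross-validation so each subject's computed rating comes from a model
# that never saw that subject, then compare computed vs. visual ratings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Synthetic cohort: 500 subjects, 20 imaging biomarkers, visual rating 0-4.
n_subjects, n_biomarkers = 500, 20
X = rng.normal(size=(n_subjects, n_biomarkers))
weights = rng.normal(size=n_biomarkers)
visual_rating = np.clip(
    np.round(X @ weights * 0.3 + 2 + rng.normal(scale=0.5, size=n_subjects)),
    0, 4,
)

# Cross-validated computed ratings from the regression model.
model = Ridge(alpha=1.0)
computed_rating = cross_val_predict(model, X, visual_rating, cv=10)

r, _ = pearsonr(visual_rating, computed_rating)
print(f"Correlation between visual and computed ratings: {r:.2f}")
```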